Search for: All records

Creators/Authors contains: "Wang, Yipeng"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available May 1, 2026
  2. Ising computation is an emerging paradigm for efficiently solving time-consuming combinatorial optimization problems (COPs). In particular, Compute-In-Memory (CIM) based Ising machines are promising for Hamiltonian computation, capturing the spin-state dynamics in a dense memory array. However, the advancement of CIM-based Ising machines for accurately solving complex COPs is limited by the lack of full spin connectivity, the small bit-widths of the spin interaction coefficients (J), data-movement energy costs within the CIM, and the area/energy overheads of peripheral CIM analog circuits. In this work, we present a data-movement-aware, CIM-based Ising machine for efficiently solving COPs. The unique design contributions are: (i) a “Ping-Pong” transpose array architecture that restricts single-bit spin data movement to within the memory array while keeping the interaction coefficients (J) stationary, (ii) fully connected spins and multi-bit J for >329× faster solution time, (iii) configurable J bit-widths and spin connectivity to support a wide variety of complex COPs, and (iv) a Bitcell-Reference (BR) based, capacitor-bank-less ADC for sensing, occupying 1.81× less area than the baseline SAR ADC. The silicon prototype in 65 nm CMOS demonstrates >4.7× better power efficiency and >9.8× better area efficiency compared to prior works. (A hedged software sketch of the corresponding Ising energy minimization appears after this list.)
  3. We show scalar-mean curvature rigidity of warped products of round spheres of dimension at least 2 over compact intervals equipped with strictly log-concave warping functions. This generalizes earlier results of Cecchini–Zeidler to all dimensions. Moreover, we show scalar curvature rigidity of round spheres of dimension at least 3 with two antipodal points removed. This resolves a problem in Gromov's "Four Lectures" in all dimensions. Our arguments are based on spin geometry. (A generic statement of the warped-product setup appears after this list.)
  4.
  5.
  6.
    Network function virtualization (NFV) technology attracts tremendous interest from the telecommunication industry and data center operators, as it allows service providers to assign resources to Virtual Network Functions (VNFs) on demand, achieving better flexibility, programmability, and scalability. To improve server utilization, one popular practice is to deploy best-effort (BE) workloads alongside high-priority (HP) VNFs when the HP VNFs' resource usage is detected to be low. The key challenge of this deployment scheme is to dynamically balance the service-level objective (SLO) and the total cost of ownership (TCO) to optimize data center efficiency under inherently fluctuating workloads. With the recent advancement in deep reinforcement learning, we conjecture that it has the potential to solve this challenge by adaptively adjusting resource allocation to reach improved performance and higher server utilization. In this paper, we present a closed-loop automation system, RLDRM, to dynamically adjust Last Level Cache allocation between HP VNFs and BE workloads using deep reinforcement learning. The results demonstrate improved server utilization while maintaining the required SLO for the HP VNFs. (A hedged sketch of such a closed-loop controller appears after this list.)
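For result 2, the following is a minimal software sketch of the Ising energy minimization that CIM-based Ising machines accelerate in hardware. It is not the paper's architecture: the fully connected coupling matrix J, field h, annealing schedule, and problem size are illustrative assumptions, and the single-spin-flip annealing loop stands in for the in-memory spin-update dynamics.

```python
# Hedged sketch: a software analogue of the Ising formulation that CIM-based
# Ising machines accelerate in hardware. J, h, the temperature schedule, and
# the problem size are illustrative choices, not parameters from the paper.
import numpy as np

def ising_energy(s, J, h):
    """Hamiltonian H(s) = -1/2 * s^T J s - h^T s for spins s_i in {-1, +1}."""
    return -0.5 * s @ J @ s - h @ s

def anneal(J, h, steps=20_000, t_start=5.0, t_end=0.05, rng=None):
    """Single-spin-flip simulated annealing over a fully connected spin system."""
    rng = np.random.default_rng() if rng is None else rng
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)           # random initial spin state
    temps = np.geomspace(t_start, t_end, steps)
    for t in temps:
        i = rng.integers(n)
        # Energy change from flipping spin i (J assumed symmetric, zero diagonal).
        dE = 2 * s[i] * (J[i] @ s + h[i])
        if dE <= 0 or rng.random() < np.exp(-dE / t):
            s[i] = -s[i]                      # accept the flip
    return s, ising_energy(s, J, h)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 16
    J = rng.integers(-3, 4, size=(n, n)).astype(float)   # multi-bit couplings
    J = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)
    h = np.zeros(n)
    spins, energy = anneal(J, h, rng=rng)
    print("final energy:", energy)
```

A hardware Ising machine evaluates the same Hamiltonian, but the J·s accumulation and spin updates happen inside the memory array rather than in this explicit loop, which is where the paper's data-movement and ADC optimizations apply.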
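For result 3, one standard way to write the warped-product metrics in question is given below; the symbols (warping function φ, interval [a, b], round metric on the n-sphere) are generic textbook notation rather than the paper's.

```latex
% Warped product of the round n-sphere (n >= 2) over a compact interval [a, b]
% with a strictly log-concave warping function \varphi > 0:
\[
  g \;=\; dt^{2} \;+\; \varphi(t)^{2}\, g_{S^{n}},
  \qquad n \ge 2,\quad t \in [a,b],\quad \bigl(\log \varphi\bigr)'' < 0 \ \text{on } [a,b],
\]
% where g_{S^{n}} denotes the round metric on the unit n-sphere.
```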
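For result 6, here is a hedged sketch of the closed-loop structure the abstract describes: an agent periodically observes high-priority (HP) VNF and best-effort (BE) telemetry and chooses how many last-level-cache (LLC) ways to give the HP class. The paper uses deep reinforcement learning; this stand-in uses a simple epsilon-greedy tabular controller, and read_telemetry()/apply_llc_ways(), the way count, and the SLO target are hypothetical placeholders for platform-specific monitoring and cache-allocation mechanisms.

```python
# Hedged sketch of a closed-loop LLC-allocation controller (not RLDRM itself).
import random

N_WAYS = 11                       # assumed number of allocatable LLC ways
ACTIONS = list(range(2, N_WAYS))  # HP VNFs get 2..N_WAYS-1 ways, BE gets the rest
SLO_LATENCY_US = 500.0            # assumed HP latency target

def read_telemetry():
    """Placeholder: return (hp_p99_latency_us, be_throughput). Replace with
    real counters such as per-class cache occupancy or request latency."""
    return random.uniform(300, 800), random.uniform(0.5, 1.0)

def apply_llc_ways(hp_ways):
    """Placeholder: program the cache partition so HP gets `hp_ways` ways."""
    pass

def reward(hp_latency, be_throughput):
    # Penalize SLO violations heavily; otherwise reward BE throughput.
    return -10.0 if hp_latency > SLO_LATENCY_US else be_throughput

def control_loop(steps=1000, epsilon=0.1, alpha=0.2):
    q = {a: 0.0 for a in ACTIONS}             # value estimate per allocation
    for _ in range(steps):
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(q, key=q.get)            # explore or exploit
        apply_llc_ways(a)
        hp_lat, be_tput = read_telemetry()
        q[a] += alpha * (reward(hp_lat, be_tput) - q[a])
    return max(q, key=q.get)

if __name__ == "__main__":
    print("preferred HP LLC ways:", control_loop())
```

The point of the sketch is the observe-act-reward loop; a deep-RL agent, as in the paper, would replace the tabular value estimate with a neural policy trained on richer telemetry.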